infinite-horizon optimal control

infinite-horizon optimal control
无限水平线最佳控制

English-Chinese computer dictionary (英汉计算机词汇大词典). 2013.
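
For reference, "infinite-horizon optimal control" denotes an optimal control problem whose cost is accumulated over an unbounded time interval rather than up to a fixed final time. A minimal sketch of one common continuous-time, discounted formulation is written out below in LaTeX; the symbols (state x, control u, dynamics f, running cost L, discount rate ρ) are generic notation assumed for illustration and are not taken from the dictionary entry itself.

    % A common discounted infinite-horizon optimal control problem
    % (generic notation, assumed for illustration only):
    \[
      \begin{aligned}
        &\min_{u(\cdot)} \; J(u) = \int_{0}^{\infty} e^{-\rho t}\, L\bigl(x(t), u(t)\bigr)\, dt, \\
        &\text{subject to}\quad \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad x(0) = x_{0},
      \end{aligned}
    \]
    % where \rho \ge 0 is a discount rate. The horizon is "infinite" because
    % the integral runs over t in [0, \infty) instead of up to a fixed T;
    % with \rho = 0 the integral need not converge, which is why undiscounted
    % infinite-horizon problems typically use modified optimality criteria.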

Look at other dictionaries:

  • Optimal control — Optimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and his collaborators in the Soviet Union[1] and Richard Bellman in… (Wikipedia)

  • Linear-quadratic-Gaussian control — In control theory, the linear quadratic Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns uncertain linear systems disturbed by additive white Gaussian noise, having incomplete state information… (Wikipedia)

  • DNSS point — DNSS points arise in optimal control problems that exhibit multiple optimal solutions. A DNSS point, named alphabetically after Deckert and Nishimura,[1] Sethi,[2][3] and Skiba,[4] is an indifference point in an optimal control problem such… (Wikipedia)

  • LNEMS290 — D.A. Carlson/A. Haurie: Infinite Horizon Optimal Control, Springer-Verlag 1987 (Acronyms; Acronyms von A bis Z)

  • Partially observable Markov decision process — A Partially Observable Markov Decision Process (POMDP) is a generalization of a Markov Decision Process. A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot… (Wikipedia)

  • Bellman equation — A Bellman equation (also known as a dynamic programming equation), named after its discoverer, Richard Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes… (Wikipedia; a worked infinite-horizon form is sketched after this list)

  • Linear-quadratic regulator — The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic functional is called… (Wikipedia; the infinite-horizon case is sketched after this list)

  • Dynamic programming — For the programming paradigm, see Dynamic programming language. In mathematics and computer science, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems… (Wikipedia)

  • Gauss pseudospectral method — The Gauss Pseudospectral Method (abbreviated GPM) is a direct transcription method for discretizing a continuous optimal control problem into a nonlinear program (NLP). The Gauss pseudospectral method differs from several other pseudospectral… (Wikipedia)

  • Markov decision process — Markov decision processes (MDPs), named after Andrey Markov, provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for… (Wikipedia)
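
As noted in the Bellman equation and Linear-quadratic regulator entries above, the infinite-horizon case admits compact textbook forms. The two LaTeX sketches below use generic notation (value function V, states s, actions a, reward r, transition probabilities P, discount factor γ; matrices A, B, Q, R) assumed for illustration; they are standard results, not excerpts from the dictionaries listed above.

    % Bellman optimality equation for a discounted infinite-horizon
    % Markov decision process (0 <= \gamma < 1 keeps the total reward finite):
    \[
      V^{*}(s) = \max_{a}\,\Bigl\{\, r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Bigr\}.
    \]

For the linear-quadratic regulator with an infinite horizon, the same optimality principle reduces, under standard stabilizability and detectability assumptions, to an algebraic Riccati equation:

    % Continuous-time infinite-horizon LQR (generic notation):
    %   dynamics  \dot{x} = A x + B u
    %   cost      J = \int_{0}^{\infty} ( x^{\top} Q x + u^{\top} R u ) \, dt
    % The optimal feedback is u = -Kx with K = R^{-1} B^{\top} P, where P
    % solves the continuous-time algebraic Riccati equation:
    \[
      A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0.
    \]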
